7 research outputs found

    Solutions of linear equations and a class of nonlinear equations using recurrent neural networks

    Artificial neural networks are computational paradigms inspired by biological neural networks (the human brain). Recurrent neural networks (RNNs) are characterized by neuron connections that include feedback paths. This dissertation uses the dynamics of RNN architectures to solve linear and certain nonlinear equations. Neural networks with linear dynamics (variants of the well-known Hopfield network) are used to solve systems of linear equations, with the network structure adapted to match properties of the linear system in question. Nonlinear equations, in turn, are solved using the dynamics of nonlinear RNNs, which are based on feedforward multilayer perceptrons. Owing to their intrinsic parallelism, neural networks are well suited for implementation on special parallel hardware. The RNNs developed here are implemented on a neural network processor (NNP) designed specifically for fast neural-type processing, and are applied to the inverse kinematics problem in robotics, demonstrating their superior performance over alternative approaches.
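    A minimal sketch of the idea behind such linear-dynamics networks (not the dissertation's actual architecture): the network state follows the negative gradient of the energy E(x) = ½‖Ax − b‖², so its equilibrium is the solution of Ax = b. The function and parameter names are illustrative.

```python
import numpy as np

# Hedged sketch: gradient-flow recurrent dynamics for solving A x = b.
# The state descends the energy E(x) = 0.5 * ||A x - b||^2, a Hopfield-style
# linear dynamics; eta is a forward-Euler step size, assumed small enough
# for stability (eta < 2 / lambda_max(A^T A)).
def rnn_linear_solve(A, b, eta=0.01, steps=20000):
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= eta * A.T @ (A @ x - b)  # Euler step of dx/dt = -A^T (A x - b)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = rnn_linear_solve(A, b)  # converges toward the solution [2, 3]
```

    Because each state update involves only matrix-vector products, the dynamics map naturally onto parallel hardware, which is the point the abstract makes about the NNP implementation.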

    Efficient Numerical Inversion Using Multilayer Feedforward Neural Networks

    An efficient second-order optimization algorithm for the numerical inversion of nonlinear functions using feedforward neural networks is presented. After a function has been successfully learned by a neural network, the (nonlinear) equation is solved using the neural network representation, in effect performing a numerical inversion. As is well known, second-order information can dramatically improve the convergence of numerical methods. It is demonstrated that the algorithm developed here, based on a neural network, also dramatically reduces the amount of computation required, yielding a more efficient numerical inversion process. Certain constraints must also be considered. We discuss these issues, present an example, and briefly characterize an applicable class of problems.
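    A hedged illustration of the underlying idea (not the paper's algorithm): once a differentiable model f has been fitted, solving f(x) = y with a second-order (Newton) update exploits the fact that a feedforward network's derivative is available in closed form. Here a fixed one-neuron "network" stands in for a trained model; all names are illustrative.

```python
import math

# Stand-in for a trained one-neuron feedforward network and its derivative.
w, c = 2.0, 0.5
def f(x):   # forward pass: tanh(w*x + c)
    return math.tanh(w * x + c)
def df(x):  # closed-form derivative of the forward pass
    return w * (1.0 - math.tanh(w * x + c) ** 2)

def invert(y, x0=0.0, tol=1e-10, max_iter=50):
    """Solve f(x) = y by Newton's method, a second-order update."""
    x = x0
    for _ in range(max_iter):
        r = f(x) - y
        if abs(r) < tol:
            break
        x -= r / df(x)  # Newton step: x <- x - (f(x) - y) / f'(x)
    return x

x_star = invert(0.7)  # f(x_star) is driven to 0.7
```

    Newton's quadratic local convergence is what makes the second-order information pay off relative to first-order iteration, as the abstract argues.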

    Cooperative Control of Unmanned Vehicle Formations

    We review the enabling theory for the decentralized and cooperative control of formations of unmanned, autonomous vehicles. The decentralized and cooperative formation control approach combines recent results from dynamical system theory, control theory, and algebraic graph theory. The stability of vehicle formations is discussed, and the applicability of the technology concept to a variety of applications is demonstrated.
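    The role of algebraic graph theory can be sketched with the standard consensus dynamics (an illustrative example, not the paper's specific controller): each vehicle updates its state using only its neighbors' states, x ← x − ε L x, where L is the Laplacian of the communication graph.

```python
import numpy as np

# Hedged sketch: decentralized consensus on an undirected 3-vehicle path
# graph. L is the graph Laplacian; each row uses only a vehicle's neighbors,
# so the update is fully decentralized. eps must satisfy eps < 2 / lambda_max(L).
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
x = np.array([0.0, 6.0, 3.0])  # initial vehicle states (e.g. positions)
eps = 0.2
for _ in range(200):
    x = x - eps * L @ x  # discrete-time consensus step

# All states converge to the average of the initial states (here, 3.0).
```

    Stability of the formation follows from the spectrum of L: the consensus subspace corresponds to the zero eigenvalue, and all other modes decay, which is the kind of result the combined dynamical-systems and graph-theoretic analysis delivers.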

    On Matching ANN Structure to Problem Domain Structure

    To achieve reduced training time and improved generalization with artificial neural networks (ANN, or NN), it is important to use a reduced-complexity NN structure. A "problem" is defined by constraints among the variables describing it. If knowledge about these constraints can be obtained a priori, it can be used to reduce the complexity of the ANN before training. The systems theory literature contains methods for determining and representing structural aspects of constrained data (herein called GSM, the general systems method). The suggestion here is to use the GSM model of the given data as a pattern for modularizing a NN prior to training it. The present work assumes the GSM model for the given problem context has been determined (represented here in the form of Boolean functions of known decompositions). This means that certain information is available about constraints among the system variables, and this information is used to develop a modularized NN. The modularized NN and an equivalent general NN (a fully interconnected, feed-forward NN) are both trained on the same data. Various predictions are offered: 1) The general NN and the modularized NN will both learn the task, but the modularized NN will learn it faster. 2) If trained on an (appropriate) subset of possible inputs, the modularized NN will generalize better than the general NN. 3) If trained on a non-decomposable function of the same variables, the general NN will learn the task, but the modularized NN will not. All of these predictions are verified experimentally. Future work will explore more decomposition types and more general data types.
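    The decomposition assumption can be illustrated concretely (a hypothetical example, not taken from the paper): a Boolean function of four variables that factors as f(a, b, c, d) = g(h1(a, b), h2(c, d)). A NN modularized to mirror this structure needs only three small sub-networks instead of one fully connected net over all four inputs.

```python
# Hedged illustration of a decomposable Boolean function: the target
# f(a, b, c, d) factors through two intermediate sub-functions, so a
# modularized NN can learn h1, h2, and g as small independent modules.
def h1(a, b):        # sub-function seeing only (a, b)
    return a ^ b
def h2(c, d):        # sub-function seeing only (c, d)
    return c & d
def f(a, b, c, d):   # composed function g(h1, h2), with g = OR
    return h1(a, b) | h2(c, d)
```

    A non-decomposable function of the same four variables admits no such factoring, which is why prediction 3 expects the structurally constrained network to fail on it.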